00100 CHAPTER 2--EXPLANATIONS AND MODELS
00200
00300
00400 It is perhaps as difficult to explain scientific explanation as it
00500 is to explain anything else. The explanatory practices of different
00600 sciences differ widely but they all share the purpose of someone
00700 attempting to answer someone else's why-how-what-etc. questions about
00800 a situation, event, episode, object or phenomenon. Thus explanation implies a
00900 dialogue whose participants share some interests, beliefs, and values.
01000 A consensus must exist about admissible and appropriate questions and answers. The participants
01100 must agree on what is a sound and reasonable question and what is a
01200 relevant, intelligible, and (believed) correct answer.
01300 The explainer tries to satisfy a questioner's curiosity by making
01400 comprehensible why something is the way it is. The answer may be a
01500 definition, an example, a synonym, a story, a theory, a model-description, etc.
01600 The answer satisfies curiosity by settling belief. Naturally the task of
01700 satisfying the curiosity of a five-year-old boy is different from that
01800 of satisfying a forty-year-old psychiatrist.
01900 Suppose a man dies and a questioner (Q) asks an explainer (E):
02000 Q: Why did the man die?
02100 One answer might be:
02200 E: Because he took cyanide.
02300 This explanation might be sufficient to satisfy Q's curiosity and he
02400 stops asking further questions. Or he might continue:
02500 Q: Why did the cyanide kill him?
02600 and E replies:
02700 E: Anyone who ingests cyanide dies.
02800 This explanation appeals to a universal generalization under which
02900 is subsumed the particular fact of this man's death. Subsumptive explanations
03000 satisfy some questioners but not others who, for example, might want to
03100 know about the physiological mechanisms involved.
03200 Q: How does cyanide work in killing people?
03300 E: It stops respiration so one dies from lack of oxygen.
03400 If Q has biochemical interests he might inquire further:
03500 Q: What is cyanide's mechanism of drug action on the respiratory center?
03600 And so on, since there is no bottom to the questions which might be asked.
03700 Nor is there a top:
03800 Q: Why did the man take cyanide?
03900 E: Because he was depressed.
04000 Q: What was he depressed about?
04100 E: He lost his job.
04200 Q: How did that happen?
04300 E: The aircraft company let go most of its engineers because of the cutback in defense contracts.
04400 Explanations are always incomplete because the top and bottom can be indefinitely
04500 extended and endless questions can be asked at each level.
04600 Just as the participants in explanatory dialogues
04700 decide what is taken to be problematic, so they also determine the termini of
04800 questions and answers. Each discipline has its characteristic stopping points.
04900 In explanatory dialogues there exist larger and smaller constellations
05000 of reference which are taken for granted as a nonproblematic background.
05100 Hence in considering the function of paranoid thought `it goes without saying',
05200 that is, it transcends this particular field of function to say
05300 that a living teleonomic system, as the larger constellation, strives for
05400 maintenance and expansion of its life using smaller oriented, informed
05500 and constructive subprocesses. It likewise goes without saying that at a lower
05600 level ion transport takes place through nerve-cell membranes. Every function
05700 of an organism can be viewed as governing a subfunction beneath it and as
05800 depending on a transfunction above which calls it into play for a purpose.
05900 Just as there are many alternative ways of describing, there are many
06000 alternative ways of explaining. An explanation is geared to some level
06100 of what the dialogue participants take to be the fundamental structures
06200 and processes under consideration. Since in psychiatry we cope with
06300 patients' problems using mainly symbolic-conceptual techniques (though
06400 the pill, the knife, and electricity are also still available),
06500 we are interested in aspects of human conduct which can be
06600 explained, understood, and modified at a symbol-processing level. Hence I shall
06700 attempt to explain paranoid conversational interactions by describing
06800 in some detail a simulation of paranoid interview behavior, having in
06900 mind an audience of mental health professionals and colleagues in fields
07000 of psychiatry, psychology, artificial intelligence, linguistics and philosophy.
07100 Symbol-processing explanations postulate an underlying intentionalistic
07200 structure of hypothetical mechanisms, functions or strategies, goal-directed symbol-processing
07300 procedures, having the power to produce and being responsible for
07400 the manifest phenomena. In this ethogenic (generating behavior, Harre[ ]) approach the term "mechanism"
07500 is not used in the classical mechanical sense of the effects of forces on particles obeying laws of
07600 motion. Nor is it used in the sense of a mechanical contrivance such as a clock or an auto.
07700 Instead it is used here, and throughout the monograph, in the more general sense of modus operandi as
07800 in the mechanism for electing a president or the mechanism of evolutionary
07900 change. Thus I shall avoid the terms "mechanical" and "mechanistic" in order
08000 to avoid metaphors and images of Newtonian physics and contrivances. As will become clear,
08100 this ethogenic viewpoint uses the terms "mechanisms", "functions", "procedures"
08200 and "strategies" as roughly synonymous.
08300
08400 2.2 Symbolic models
08500 An algorithm composed of symbolic computational
08600 procedures converts input symbolic structures into output symbolic
08700 structures according to certain principles. The modus operandi
08800 of a symbolic model is simply the workings of an algorithm when run on
08900 a computer. At this level of explanation, to answer `why?' means to provide
09000 an algorithm which makes explicit how symbolic structures go together,
09100 how they are organized to work to generate patterns of manifest phenomena.
09200
09300 To simulate the input-output behavior of a system using symbolic
09400 computational procedures, we construct a model which produces I/O
09500 behavior resembling that of the subject system being simulated. The
09600 resemblance is achieved through the workings of an inner postulated
09700 structure in the form of an algorithm, an organization of intentionalistic
09800 symbol processing procedures which are responsible for the characteristic
09900 observable behavior at the input-output level. Since we do not know the
10000 structure of the `real' simulative mechanisms used by the mind-brain,
10100 our postulated structure stands as an imagined theoretical analogue,
10200 a possible and plausible organization of mechanisms analogous to the
10300 unknown mechanisms and serving as an attempt to explain the workings
10400 of the system under study. A simulation model is thus deeper than a
10500 pure black-box explanation because it postulates functionally equivalent
10600 mechanisms inside the box to account for observable patterns of I/O
10700 behavior. A simulation model constitutes an interpretive explanation
10800 in that it makes intelligible the connections between external input,
10900 internal states and output by postulating intervening symbol-processing procedures operating
11000 between symbolic input and symbolic output. An intelligible description
11100 of the model should make clear why and how it reacts as it does under
11200 various circumstances.
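By way of illustration only, the following sketch (written in Python, a present-day language rather than the MLISP used later in this chapter; the names mistrust and THREAT_WORDS are invented and are not those of the model described in this monograph) shows in miniature what is meant by intervening procedures operating between symbolic input and symbolic output: the reply depends not on the input words alone but on a postulated internal state which the input itself modifies.

# A minimal sketch of an interpretive model: output is a joint consequence
# of symbolic input and a postulated internal state.  All names are
# hypothetical illustrations, not the monograph's model.

THREAT_WORDS = {"police", "crook", "crazy"}

class ToyInterpretiveModel:
    def __init__(self):
        self.mistrust = 0.1                    # postulated internal state

    def respond(self, words):
        # intervening procedure: input symbols alter the internal state
        if any(w in THREAT_WORDS for w in words):
            self.mistrust += 0.3
        # the reply depends on input and internal state together
        if self.mistrust > 0.5:
            return "Why do you want to know that?"
        return "I see. Go on."

model = ToyInterpretiveModel()
print(model.respond("are you afraid of the police".split()))   # I see. Go on.
print(model.respond("tell me about the police".split()))       # Why do you want to know that?

The same class of remark thus receives different replies on its two occurrences, a connection which a pure black-box record of input-output pairs alone could not make intelligible.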
11300 To cite a universal generalization to explain an individual's behavior
11400 is unsatisfactory to a questioner who is interested in what powers and
11500 liabilities are latent behind manifest phenomena. To say `x is nasty
11600 because x is paranoid and all paranoids are nasty' may be relevant,
11700 intelligible and correct but it does not cite a structure which can account
11800 for `nasty' behavior as a consequence of input and internal states of
11900 a system. A model explanation specifies particular antecedents and mechanisms
12000 through which antecedents generate the phenomena. This ethogenic approach to
12100 explanation assumes that perceptible phenomena display the regularities and
12200 irregularities they do because of the nature of a (currently) imperceptible
12300 and inaccessible underlying structure.
12400 When attempts are made to explain human behavior, principles in
12500 addition to those accounting for the natural order are invoked. `Nature
12600 entertains no opinions about us' said Nietzsche, but human natures do and
12700 therein lies a source of complexity for the understanding of human nature.
12800 Until the first quarter of the 20th century, the natural sciences were guided by the Newtonian ideal
12900 of perfect process knowledge about inanimate objects whose behavior can
13000 be subsumed under lawlike generalizations. When a deviation from a law was
13100 noticed, it was the law which was modified, since by definition physical objects do not have the power to break laws.
13200 When the planet Mercury was observed to deviate from the orbit predicted
13300 by Newtonian theory, no one accused the planet of being an intentional agent
13400 breaking the law; something was wrong with the theory. Subsumptive explanation is the acceptable norm in physics
13500 but it is seldom satisfactory in accounting for the behavior
13600 of living intentionalistic systems. In considering the behavior of falling bodies
13700 no one nowadays follows the Aristotelian pattern of attributing an intention
13800 to fall to the object in question. But in the case of living systems, especially
13900 ourselves, our ideal explanatory practice remains Aristotelian in utilizing
14000 a concept of intention. (Aristotle was not wrong about everything.)
14100 Consider a man participating in a high-diving contest. In falling towards
14200 the water he accelerates at the rate of 32 feet per second per second. Viewing
14300 the man simply as a falling body, we explain his rate of fall by appealing to a physical
14400 law. Viewing the man as a human intentionalistic agent, we explain his dive as the result
14500 of an intention to dive in a certain way in order to win the diving contest.
14600 His action (in contrast to mere movement) involves an intended following
14700 of certain conventional rules for what is judged by humans to constitute, say,
14800 a swan dive. Suppose part way down he chooses to change his position in
14900 mid-air and enter the water thumbing his nose at the judges. He cannot break
15000 the law of falling bodies but he can break the rules of diving and make a
15100 gesture which expresses disrespect and which he believes will be interpreted
15200 as such by the onlookers. Our diver breaks a rule for diving but follows
15300 another rule which prescribes gestural action for insulting behavior.
15400 To explain the actions of diving and nose-thumbing, we
15500 would appeal, not to laws of natural order, but to an additional order, to
15600 principles of human order, superimposed on laws of natural order and which
15700 take into account (1) standards of appropriate action in certain situations
15800 and (2) the agent's inner considerations of intention, belief and value
15900 which he finds compelling from his point of view.
16000 In this type of explanation the explanandum, that which is being explained,
16100 is the agent's informed actions, not simply his movements. When a human
16200 agent performs an action in a situation, we can ask (1) whether the action is
16300 appropriate to that situation and (2), if not, why the agent believed his
16400 action to be called for.
16500 As will be shown, symbol-processing explanations rely on concepts
16600 of action, intention, belief, affect, preference, etc. These terms are
16700 close to the terms of ordinary language as is characteristic of early
16800 stages of explanations. It is also important to note that such terms are commonly utilized
16900 in describing computer algorithms in which final causes guide efficient causes. In
17000 an algorithm these ordinary terms can be explicitly defined and
17100 represented.
17200 Psychiatry deals with the practical concerns of inappropriate action,
17300 belief, etc. on the part of a patient. His behavior may be inappropriate
17400 to the onlooker since it represents a lapse from the expected, a
17500 contravention of the human order. It may even appear this way to the
17600 patient in monitoring and directing himself. But sometimes, as in severe cases of the paranoid mode,
17700 the patient's behavior does not appear anomalous to himself. He maintains
17800 that anyone who understands his point of view, who conceptualizes
17900 situations as he does from the inside, would consider his outer behavior
18000 appropriate and justified. What he does not understand or accept is
18100 that his inner conceptualization is mistaken and represents a misinterpretation
18200 of the events of his experience.
18300 The model to be presented in the sequel constitutes an attempt to
18400 explain some regularities and particular occurrences of conversational
18500 paranoid phenomena observable in the clinical situation of a psychiatric
18600 interview. The explanation is at the symbol-processing level of
18700 linguistically communicating agents and is cast in the form of a dialogue
18800 algorithm. Like all explanations it is incomplete, only partially accurate,
18900 and does not claim to represent the only conceivable structure of mechanisms.
19000
19100 2.3 The nature of algorithms
19200
19300 Theories can be presented in various forms such as natural language
19400 assertions, mathematical equations and computer programs. To date most
19500 theoretical explanations in psychiatry and psychology have consisted
19600 of natural language essays with all their well-known vagueness and
19700 ambiguities. Many of these formulations have been untestable, not because
19800 relevant observations were lacking but because it was unclear what
19900 the essay was really saying. Clarity is needed.
20000 An alternative way of formulating psychological theories is now
20100 available in the form of ethogenic algorithms, computer programs, which have
20200 the virtue of being clear and explicit in their articulation and which
20300 can be run on a computer to test internal consistency and external correspondence with the data of observation.
20400 Since we do not know the `real' mind-brain algorithms,
20500 we construct a theoretical model which represents a partial
20600 paramorphic analogue. (See Harre, 1972). The analogy is at the symbol-
20700 processing level, not at the hardware level. A functional, computational
20800 or procedural equivalence is being postulated. The question then becomes
20900 one of determining the degree of the equivalence. Weak functional equivalence
21000 consists of indistinguishability at the outermost input-output level.
21100 Strong equivalence means correspondence at each inner I/O level, that is,
21200 there exists a match not only in what is being done but in how it is
21300 being done at a given level of operations. (These points will be discussed
21400 in greater detail in Chapter 3.)
21500 An algorithm is an organization of symbol-processing mechanisms or functions
21600 which constitutes an `effective procedure'. It is essential here to grasp this concept.
21700 An effective procedure consists of two ingredients:
21800 (1) A programming language in which procedural rules of behavior
21900 can be rigorously and unambiguously specified.
22000 (2) A machine processor which can rapidly and reliably carry out
22100 the processes specified by the procedural rules.
22200 The specification of (1), written in a formally defined programming
22300 language, is termed an algorithm or program, while (2) involves a computer
22400 as the machine processor, a set of deterministic physical mechanisms
22500 which can perform the operations specified in the algorithm. The
22600 algorithm is called `effective' because it actually works, performing
22700 as intended when run on the machine processor.
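As a small illustration of these two ingredients, the following sketch (in Python, with invented rules that have nothing to do with the model described later in this monograph) separates a rigorously stated table of procedural rules from the machine process that mechanically applies them.

# A minimal sketch of an effective procedure: (1) rules stated precisely
# enough to leave nothing to judgment, here a small table, and (2) a
# processor which mechanically carries them out.  The rules are invented.

rules = [
    ("HELLO", "HOW DO YOU DO."),
    ("WHY",   "I WOULD RATHER NOT SAY."),
    ("",      "GO ON."),                  # empty pattern: default rule
]

def processor(remark):
    # carries out the specified rules reliably, and nothing more
    for pattern, response in rules:
        if pattern in remark:
            return response

print(processor("WHY ARE YOU HERE"))      # I WOULD RATHER NOT SAY.
print(processor("TELL ME MORE"))          # GO ON.

The table alone specifies and the processor alone executes; the procedure is `effective' only because the two together actually produce the intended behavior when run.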
22800 It is worth re-emphasizing that a simulation model postulates
22900 procedures analogous to the real and unknown procedures. The analogy being
23000 drawn here is between specified processes and their generating systems.
23100 Thus
23200
23300          mental process                    computational process
23400          --------------------      ::      --------------------------
23500          brain hardware and                computer hardware and
23600          programs                          programs
23700 The analogy is not simply between computer hardware and brain wetware.
23800 We are not comparing the structure of neurons with the structure of
23900 transistors; we are comparing the organization of symbol-processing
24000 procedures in an algorithm with symbol-processing procedures of the
24100 mind-brain. The central nervous system contains a representation of
24200 the experience of its holder. A model builder has a conceptual representation
24300 of that representation which he demonstrates in the form of an algorithm.
24400 Thus an algorithm is a demonstration of a representation of a representation.
24500 When an algorithm runs on a computer the postulated explanatory
24600 structure becomes actualized, not described. (To describe the model
24700 is to present, among other things, its embodied theory). A simulation model such as the
24800 one presented here can be interacted with by a person at the linguistic
24900 level as a communicating agent in the world. Its symbolic communicative behavior
25000 can be experienced in a concrete form by a human observer-actor.
25100 Thus it can be known by acquaintance, by first-hand knowledge, as well
25200 as by the second-hand knowledge of description.
25300 Since the algorithm is written in a programming language, it is hermetic
25400 and opaque except to a few people, who in general do not enjoy reading
25500 other people's code. Hence the intelligibility requirement for explanations
25600 must be met in other ways. In an attempt to open the model to scrutiny
25700 I shall describe the model in detail, making liberal use of diagrams
25800 and interview examples.
26000
26100 2.4 Analogy
26200 I have stated that a simulation model of a symbolic system reproduces
26300 the behavior of that system at some input-output level. The reproduction
26400 is achieved through the operations of an algorithm which represents
26500 an organization of hypothetical symbol-processing strategies or procedures
26600 which have the ability to generate the I/O behavior of the processes
26700 under investigation. The algorithm must be an effective procedure, that is,
26800 one which really works in the manner intended by the model-builders. In the model
26900 herein described our paranoid algorithm generates linguistic I/O behavior
27000 typical of patients whose thought processes are dominated by the paranoid mode.
27100 Given that the manifest outermost I/O behavior of the model is
27200 indistinguishable from the manifest outward I/O behavior of paranoid
27300 patients, does this imply that the hypothetical underlying processes used
27400 by the model are analogous to or the same as the underlying processes
27500 used by persons in the paranoid mode? This deep and thorny question
27600 should be approached with caution and only when we are first armed with some clear notions
27700 about analogy, similarity, faithful reproduction, indistinguishability and functional equivalence.
27800 In comparing two things (objects, systems or processes) one can cite properties they
27900 have in common, properties they do not share and properties regarding which
28000 it is difficult to tell. No two things are exactly alike in every detail.
28100 If they were identical in respect to all their properties then they would be copies. If
28200 they were identical in every respect including their spatio-temporal
28300 location we would say we have only one thing instead of two. One can
28400 assert with some justification that a given thing is not similar to
28500 anything else in the world or that it is similar to everything else in the world,
28600 depending upon how properties are cited.
28700 Similarity relations are used in processes of classification in which
28800 objects are grouped into classes, the classes then representing object-
28900 concepts. The members of a class of object-concepts resemble one another
29000 in sharing certain properties. The resemblance between members of the class
29100 is not exact or total. Members of a class are considered more or less alike
29200 and there exist degrees of resemblance. A classification may involve only single
29300 properties while a taxonomy seeks to classify things according to their
29400 structure or organization. Thus a simulation model contributes to taxonomy:
29500 since model X is structurally analogous to its subject Y, Y is to be
29600 viewed as belonging to the same class as X.
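The idea of degrees of resemblance can be made concrete by counting shared and unshared properties. The sketch below (Python; the objects and their property lists are invented for illustration) returns 1 for copies and 0 for things with no cited property in common.

# A minimal sketch of resemblance as the overlap of cited properties.

def resemblance(props_x, props_y):
    shared = props_x & props_y            # properties held in common
    cited  = props_x | props_y            # all properties cited for either
    return len(shared) / len(cited)

model_x  = {"processes symbols", "holds beliefs", "produces language"}
person_y = {"processes symbols", "holds beliefs", "produces language", "has neurons"}

print(resemblance(model_x, person_y))     # 0.75: alike in some respects, not all

Which properties are cited is, of course, exactly the choice that makes a thing similar to everything or to nothing.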
29700 In an analogy a comparison is drawn between two things. `Newton did not
29800 show the cause of the apple falling but he showed a similitude between the
29900 apple and the stars.' (D'Arcy Thompson). Huygens suggested an analogy between
30000 sound waves and light waves in order to understand something less well-understood
30100 (light) in terms of something better understood (sound). To account for species
30200 variation, Darwin postulated a mechanism of natural selection. He constructed
30300 an analogy from two sources, one from artificial selection as practiced
30400 by domestic breeders of animals and one from Malthus' theory of a competition
30500 for existence in a population increasing geometrically while its resources
30600 increase arithmetically. Bohr's model of the atom offered an analogy between
30700 the solar system and the atom. These few well-known historical examples make vivid
30800 the role of analogies in theory construction. Such analogies are partial
30900 paramorphs (Harre, 1971) in that two systems are compared for parallelisms
31000 and they are compared only in respect to certain properties which
31110 constitute the positive and neutral analogy. The negative analogy is ignored.
31111 Bohr's model of the atom as a miniature planetary system was
31200 not intended to suggest that electrons possessed color or that planets
31300 jumped out of their orbits.
31310 2.5 Functional equivalence
31400 When human thought is the subject of a simulation model, we draw from
31500 two sources, symbolic computation and psychology, an analogy between
31600 systems known to be able to process symbols: persons and computers. The
31700 properties compared in the analogy are obviously not physical or substantive
31800 such as blood and wires, but functional and procedural. We want to assume
31900 that the poorly understood procedures of thought in a person are
32000 similar to the somewhat better understood procedures of symbol-processing
32100 which take place in a computer. The analogy is one of functional
32200 or procedural equivalence. If model and human are indistinguishable at a manifest
32300 I/O level, then they can be considered weakly equivalent. If they are
32400 indistinguishable at deeper and deeper I/O levels, then strong equivalence
32500 is achieved. (See Fodor, 1968). How stringent and how deep are the
32600 demands for equivalence to be? Must there be point-to-point correspondences
32700 at every level? What is to count as a point and what are the levels?
32800 Procedures can be specified and ostensively pointed to in an algorithm
32900 but how can we point to the inaccessible symbolic processes in a person's head?
33000 Does a demonstration of functional equivalence constitute an explanation of observable
33100 behavior?
33200 In constructing an algorithm one puts together an organization
33300 of collaborating functions. (As mentioned, we use the terms `function',
33400 `procedure' and `strategy' interchangeably.) A function takes some symbolic
33500 structure as input and yields some other symbolic structure as output.
33600 Two computationally equivalent functions, having the same input and yielding
33700 the same output, can differ `inside' the function at the instruction level.
33800 Consider an elementary programming problem which students in symbolic
33900 computation are commonly asked to solve. Given a list L of symbols,
34000 L=(A B C D), as input, construct a function or procedure which will
34100 convert this list to the list RL in which the order of the symbols is
34200 reversed, i.e. RL=(D C B A). Here are some examples of functions which
34300 will carry out the operation of reversal. (They are written in the high-level
34400 programming language MLISP).
34500 REVERSE1 (L);
34600 BEGIN
34700 NEW RL;
34800 RETURN FOR NEW I IN L DO
34900 RL ← I CONS RL;
35000 END;
35100
35200 REVERSE2 (L);
35300 BEGIN
35400 NEW RL, LEN;
35500 LEN ← LENGTH (L);
35600 FOR NEW N ← 1 TO LEN DO
35700 RL[N] ← L[LEN - N + 1];
35800 RETURN RL;
35900 END;
36000 REVERSE3 (L);
36100 REVERSE3A (L,NIL);
36200
36300 REVERSE3A (L,RL);
36400 IF NULL L THEN RL
36500 ELSE REVERSE3A (CDR L, CAR L CONS RL);
36600 Each of these computational functions takes a list of symbols, L, as
36700 input and produces a new list, RL, in which the order of the symbols on the
36800 input list is reversed. It is at this I/O level that the functions can
36900 be said to be equivalent. Looking inside the functions one can see
37000 similarities as well as differences at the level of the individual
37100 instructions. For instance, REVERSE1 steps down the input list L, takes
37200 each symbol found and inserts it at the front of the new list RL. On the
37300 other hand, REVERSE2 counts the length of the input list L using another
37400 function called LENGTH which determines the length of a list. REVERSE2
37500 then uses index expressions on both sides of an assignment operator, ← ,
37600 (a) to obtain a position in the list RL, (b) to obtain a symbol in the list
37700 L and (c) to assign the symbol to that position in the reversed list RL.
37800 Notice that REVERSE1 and REVERSE2 are similar in that they use FOR loops
37900 while REVERSE3, which calls another function REVERSE3A, does not. REVERSE3A
38000 is different from all the others in that it contains an IF expression.
38100 Hence similarities and differences can be cited between functions as
38200 long as we are clear about levels and degrees of detail. The above-described
38300 functions are computationally equivalent at the input-output level since
38400 they take the same symbolic structures as input and produce the same
38500 symbolic output.
38600 2.6 Evidence for functional equivalence
38700 If we propose that an algorithm we have constructed is functionally
38800 equivalent to what goes on in humans when they process symbolic structures,
38900 how can we justify this position? Indistinguishability tests at, say,
39000 the linguistic level provide evidence only for weak equivalence. We
39100 would like to be able to get inside the underlying processes in humans
39200 the way we can with an algorithm by inspecting its instructional code.
39300 The difficulty lies in identifying, making tangible and counting processes
39310 in human heads. Many experiments must be designed and carried out.
39400 We must have great patience with the neural sciences and psychology.
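What an indistinguishability test for weak equivalence amounts to can be stated in a few lines. The sketch below (Python; the two reversal procedures simply restate, for illustration, the MLISP examples given earlier) compares procedures only at the outermost input-output level over a finite set of test inputs and inspects nothing inside them.

# A minimal sketch of a weak-equivalence test: two procedures count as
# indistinguishable over a test set if they give the same output for
# every input tried.  How the work is done inside is not examined.

def reverse_by_consing(lst):              # steps down the list, building a new one
    rl = []
    for symbol in lst:
        rl = [symbol] + rl
    return rl

def reverse_by_indexing(lst):             # fills positions counted from the far end
    n = len(lst)
    return [lst[n - 1 - i] for i in range(n)]

def weakly_equivalent(f, g, test_inputs):
    return all(f(list(t)) == g(list(t)) for t in test_inputs)

tests = [("A", "B", "C", "D"), (), ("A",)]
print(weakly_equivalent(reverse_by_consing, reverse_by_indexing, tests))   # True

Passing such a test says nothing about whether the two procedures are strongly equivalent; that question requires opening them up level by level.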
39500 In the meantime, besides weak equivalence and plausibility arguments,
39600 one can appeal to extra-evidential support from other
39700 relevant domains. One can offer analogies between what is known to go on at
39800 a molecular level in living organisms and what goes on in an algorithm.
39900 For example, a DNA molecule in the nucleus of a cell consists of an
40000 ordered sequence (list) of nucleotide bases (symbols) coded in triplets
40100 termed codons (words). Each codon specifies which amino acid is to be
40200 linked, during protein synthesis, into the chain of polypeptides
40300 making up the protein. The codons function like instructions in a
40400 programming language. One codon is known to operate as a terminal symbol
40500 analogous to symbols in an algorithm which mark the end of a list.
40600 If a stop codon appears in the middle of a sequence rather than at its
40700 normal terminal position, as in a point mutation, further protein
40800 synthesis is prevented. The resulting polypeptide chain is abnormal
40900 and may have lethal or trivial consequences for the organism, depending
41000 on what other collaborating processes require to be handed to them. The same
41100 holds in an algorithm. To use our previous programming example, the list L
41200 consisting of the symbols (A B C D) actually contains the terminal
41300 symbol NIL, which by convention is left unwritten.
41400 If in reversing the list (A B C D NIL) the symbol NIL appeared in the
41500 middle of the list, i.e. (A B NIL C D), then the reversed list RL would
41600 contain only (B A) instead of the expected (D C B A) because
41700 the terminal symbol had been encountered. Such a result may be lethal
41800 or trivial to the algorithm depending on what other functions require
41900 as input from the reversing function. Each function in an algorithm
42000 is embedded in an organization of collaborating functions just as
42100 is the case in living organisms.
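The consequence just described can be shown directly. The sketch below (Python, illustrative only) adopts the convention stated in the text, namely that the symbol NIL marks the end of a list and that processing stops when it is encountered; a NIL appearing in mid-list then truncates the result, much as a premature stop codon truncates a polypeptide chain.

# A minimal sketch assuming the convention described in the text: NIL is
# the terminal symbol and stepping down the list stops when it is met.

NIL = "NIL"

def reverse_until_terminal(lst):
    rl = []
    for symbol in lst:
        if symbol == NIL:                 # terminal symbol reached; stop
            break
        rl = [symbol] + rl
    return rl

print(reverse_until_terminal(["A", "B", "C", "D", NIL]))   # ['D', 'C', 'B', 'A']
print(reverse_until_terminal(["A", "B", NIL, "C", "D"]))   # ['B', 'A']

Whether the truncated result (B A) is lethal or trivial depends, as in the organism, on what the functions downstream require of it.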
42200 We know that at the molecular level of living organisms there exist
42300 rules for processes such as serial progression along a nucleotide
42400 sequence which are analogous to stepping down a list in an algorithm.
42500 Further analogies can be made between point mutations in which DNA
42600 codons can be inserted, deleted, substituted or reordered and symbolic
42700 computation in which the same operations are commonly carried out.
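For concreteness, the four editing operations just named (insertion, deletion, substitution, reordering) are shown below on a list of symbols standing in for codons (Python; the particular symbols are invented for illustration).

# A minimal sketch of the four editing operations on a list of symbols
# standing in for codons.  The symbols themselves are invented.

codons = ["AUG", "GCU", "UUU", "UAA"]

inserted    = codons[:2] + ["CCC"] + codons[2:]              # insertion
deleted     = codons[:1] + codons[2:]                        # deletion
substituted = codons[:2] + ["GGG"] + codons[3:]              # substitution
reordered   = [codons[0], codons[2], codons[1], codons[3]]   # reordering

for variant in (inserted, deleted, substituted, reordered):
    print(variant)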
42800 Such analogies are interesting as extra-evidential support but obviously
42900 closer linkages are needed between the macro-level of thought processes
43000 and the micro-level of molecular information-processing.
43100 To obtain evidence for the acceptability of the model, empirical tests
43200 are utilized in evaluation procedures. Such tests should also tell us
43300 which is the best among alternative models. Once we have the `best available'
43400 model, can we be sure it is correct or verisimilar? We can never know with certainty. Theories
43500 and models have short half-lives as approximations and are superseded by better ones.